11 research outputs found
Gabor Filter Assisted Energy Efficient Fast Learning Convolutional Neural Networks
Convolutional Neural Networks (CNN) are being increasingly used in computer
vision for a wide range of classification and recognition problems. However,
training these large networks demands high computational time and energy
requirements; hence, their energy-efficient implementation is of great
interest. In this work, we reduce the training complexity of CNNs by replacing
certain weight kernels of a CNN with Gabor filters. The convolutional layers
use the Gabor filters as fixed weight kernels, which extract intrinsic
features, alongside regular trainable weight kernels. This combination creates a
balanced system that gives better training performance in terms of energy and
time, compared to the standalone CNN (without any Gabor kernels), in exchange
for tolerable accuracy degradation. We show that the accuracy degradation can
be mitigated by partially training the Gabor kernels, for a small fraction of
the total training cycles. We evaluated the proposed approach on 4 benchmark
applications. Simpler tasks such as face detection and character recognition (MNIST
and TiCH) were implemented using the LeNet architecture, while the more complex task
of object recognition (CIFAR10) was implemented on a state-of-the-art deep CNN
(Network in Network) architecture. The proposed approach yields 1.31-1.53x
improvement in training energy in comparison to conventional CNN
implementation. We also obtain improvement up to 1.4x in training time, up to
2.23x in storage requirements, and up to 2.2x in memory access energy. The
accuracy degradation suffered by the approximate implementations is within 0-3%
of the baseline. Comment: Accepted in ISLPED 201
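
To make the fixed-kernel idea concrete, the following is a minimal PyTorch-style sketch of a convolutional layer in which half of the kernels are frozen Gabor filters and the rest remain trainable. The gabor_kernel helper, the 50/50 split, and all parameter values are illustrative assumptions, not the configuration reported in the paper.

# Illustrative sketch (not the paper's exact setup): a conv layer whose first
# half of kernels are fixed Gabor filters and whose second half are trainable.
import numpy as np
import torch
import torch.nn as nn

def gabor_kernel(size, theta, sigma=2.0, lambd=4.0, gamma=0.5, psi=0.0):
    """Generate a single real-valued Gabor kernel of shape (size, size)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * x_t / lambd + psi)
    return g.astype(np.float32)

class GaborAssistedConv(nn.Module):
    def __init__(self, in_ch=1, out_ch=8, ksize=5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, ksize, padding=ksize // 2)
        n_fixed = out_ch // 2  # assumed split: half the kernels are fixed Gabor filters
        thetas = np.linspace(0, np.pi, n_fixed, endpoint=False)
        with torch.no_grad():
            for i, th in enumerate(thetas):
                g = torch.from_numpy(gabor_kernel(ksize, th))
                self.conv.weight[i] = g.expand(in_ch, ksize, ksize)
        # Zero the gradient of the Gabor kernels so only the remaining
        # kernels are updated during training.
        self.conv.weight.register_hook(self._mask_grad(n_fixed))

    @staticmethod
    def _mask_grad(n_fixed):
        def hook(grad):
            grad = grad.clone()
            grad[:n_fixed] = 0.0  # no updates for the fixed Gabor kernels
            return grad
        return hook

    def forward(self, x):
        return torch.relu(self.conv(x))

Partially training the Gabor kernels, as the abstract suggests for recovering accuracy, would amount to removing the gradient mask for a small fraction of the training iterations.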
Significance Driven Hybrid 8T-6T SRAM for Energy-Efficient Synaptic Storage in Artificial Neural Networks
Multilayered artificial neural networks (ANN) have found widespread utility
in classification and recognition applications. The scale and complexity of
such networks together with the inadequacies of general purpose computing
platforms have led to a significant interest in the development of efficient
hardware implementations. In this work, we focus on designing energy efficient
on-chip storage for the synaptic weights. In order to minimize the power
consumption of typical digital CMOS implementations of such large-scale
networks, the digital neurons could be operated reliably at scaled voltages by
reducing the clock frequency. In contrast, the on-chip synaptic storage
designed using a conventional 6T SRAM is susceptible to bitcell failures at
reduced voltages. However, the intrinsic error resiliency of NNs to small
synaptic weight perturbations enables us to scale the operating voltage of the
6T SRAM. Our analysis on a widely used digit recognition dataset indicates that
the voltage can be scaled by 200mV from the nominal operating voltage (950mV)
for practically no loss (less than 0.5%) in accuracy (22nm predictive
technology). Scaling beyond that causes substantial performance degradation
owing to increased probability of failures in the MSBs of the synaptic weights.
We therefore propose a significance-driven hybrid 8T-6T SRAM, wherein the
sensitive MSBs are stored in 8T bitcells that are robust at scaled voltages due
to decoupled read and write paths. In an effort to further minimize the area
penalty, we present a synaptic-sensitivity driven hybrid memory architecture
consisting of multiple 8T-6T SRAM banks. Our circuit to system-level simulation
framework shows that the proposed synaptic-sensitivity driven architecture
provides a 30.91% reduction in the memory access power with a 10.41% area
overhead, for less than 1% loss in the classification accuracy. Comment: Accepted in the Design, Automation and Test in Europe 2016 conference (DATE-2016)
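
The bit-significance argument can be illustrated at the functional level with the NumPy sketch below, which injects random bit flips only into the lower-order bits of 8-bit synaptic weights, mimicking 6T bitcell failures at a scaled voltage while the MSBs, assumed to sit in robust 8T cells, stay error-free. The bit split, failure probability, and weight format are assumptions for illustration, not the paper's circuit-level simulation framework.

# Illustrative sketch: significance-driven error injection for 8-bit weights.
# MSBs (assumed stored in 8T cells) are error-free; LSBs (6T cells) flip with
# probability p_fail at a scaled supply voltage. All values are assumptions.
import numpy as np

def inject_lsb_errors(weights_q, n_msb=4, p_fail=1e-3, rng=None):
    """weights_q: integer array of 8-bit two's-complement weights in [-128, 127]."""
    rng = np.random.default_rng() if rng is None else rng
    w = weights_q.astype(np.uint8)          # reinterpret as raw 8-bit words
    n_lsb = 8 - n_msb
    for bit in range(n_lsb):                # only the unprotected LSBs can flip
        flips = rng.random(w.shape) < p_fail
        w = np.where(flips, w ^ (1 << bit), w)
    return w.astype(np.uint8).view(np.int8).astype(np.int64)

# Example: perturb a random weight matrix and measure the mean absolute change.
rng = np.random.default_rng(0)
weights = rng.integers(-128, 128, size=(256, 784))
noisy = inject_lsb_errors(weights, n_msb=4, p_fail=1e-3, rng=rng)
print("mean |delta|:", np.abs(noisy - weights).mean())

Sweeping p_fail (i.e., the supply voltage) and re-evaluating classification accuracy with the perturbed weights gives a functional proxy for the voltage-scaling study described above.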
Exploration of Energy Efficient Hardware and Algorithms for Deep Learning
Deep Neural Networks (DNNs) have emerged as the state-of-the-art technique for a wide range of machine learning tasks in analytics and computer vision on the next generation of embedded (mobile, IoT, wearable) devices. Despite their success, they suffer from high energy requirements in both inference and training. In recent years, the inherent error resiliency of DNNs has been exploited by introducing approximations at either the algorithmic or the hardware level (individually) to obtain energy savings while incurring tolerable accuracy degradation. We perform a comprehensive analysis to determine the effectiveness of cross-layer approximations for the energy-efficient realization of large-scale DNNs. Our experiments on recognition benchmarks show that cross-layer approximation provides substantial improvements in energy efficiency for different accuracy/quality requirements. Furthermore, we propose a synergistic framework for combining the approximation techniques.

To reduce the training complexity of Deep Convolutional Neural Networks (DCNNs), we replace certain weight kernels of convolutional layers with Gabor filters. The convolutional layers use the Gabor filters as fixed weight kernels, which extract intrinsic features, alongside regular trainable weight kernels. This combination creates a balanced system that gives better training performance in terms of energy and time, compared to the standalone Deep CNN (without any Gabor kernels), in exchange for tolerable accuracy degradation.

We also explore an efficient training methodology for incrementally growing a DCNN so that new classes can be learned while part of the base network is shared. Our approach is an end-to-end learning framework, in which we focus on reducing the incremental training complexity while achieving accuracy close to the upper bound without using any of the old training samples.

Finally, we have explored spiking neural networks for energy efficiency. Training deep spiking neural networks from direct spike inputs is difficult because their temporal dynamics are not well suited to the standard supervision-based training algorithms used to train DNNs. We propose a spike-based backpropagation training methodology for state-of-the-art deep Spiking Neural Network (SNN) architectures. This methodology enables real-time training in deep SNNs while achieving comparable inference accuracies on standard image recognition tasks.
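
Spike-based backpropagation generally relies on a surrogate gradient, because the spiking nonlinearity itself is non-differentiable. The PyTorch sketch below shows one common formulation (a piecewise-linear surrogate around the firing threshold applied to a leaky integrate-and-fire layer); the surrogate shape, neuron dynamics, and parameter values are illustrative assumptions, not necessarily the exact methodology proposed in the work.

# Illustrative sketch: a leaky integrate-and-fire (LIF) layer trained with a
# surrogate gradient, the usual device that makes spike-based backprop possible.
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v, threshold=1.0):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()          # binary spike output

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Piecewise-linear surrogate: gradient flows only near the threshold.
        surrogate = (torch.abs(v - ctx.threshold) < 0.5).float()
        return grad_out * surrogate, None

class LIFLayer(nn.Module):
    def __init__(self, in_features, out_features, decay=0.9):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.decay = decay

    def forward(self, spikes_in):                # spikes_in: (time, batch, in)
        t_steps, batch, _ = spikes_in.shape
        v = torch.zeros(batch, self.fc.out_features, device=spikes_in.device)
        spikes_out = []
        for t in range(t_steps):
            v = self.decay * v + self.fc(spikes_in[t])   # leaky integration
            s = SpikeFn.apply(v)                          # fire if threshold crossed
            v = v * (1.0 - s)                             # hard reset after a spike
            spikes_out.append(s)
        return torch.stack(spikes_out)            # (time, batch, out)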